Scattering GCN: Overcoming Oversmoothness in Graph Convolutional Networks
Graph convolutional networks (GCNs) have shown promising results in processing graph data by extracting structure-aware features. This gave rise to extensive work in geometric deep learning, focusing on designing network architectures that ensure neuron activations conform to regularity patterns within the input graph. However, in most cases the graph structure is only accounted for by considering the similarity of activations between adjacent nodes, which limits the capabilities of such methods to discriminate between nodes in a graph. Here, we propose to augment conventional GCNs with geometric scattering transforms and residual convolutions. The former enables band-pass filtering of graph signals, thus alleviating the so-called oversmoothing often encountered in GCNs, while the latter is introduced to clear the resulting features of high-frequency noise. We establish the advantages of the presented Scattering GCN with theoretical results demonstrating the complementary benefits of scattering and GCN features, as well as experimental results showing the benefits of our method compared to leading graph neural networks for semi-supervised node classification, including the recently proposed GAT network that typically alleviates oversmoothing using graph attention mechanisms.
Scattering GCN: Overcoming Oversmoothness in Graph Convolutional Networks - Supplement
Yimeng Min, Frederik Wenkel, Guy Wolf
A Proofs and Illustrative Examples of Lemmas 1 and 2
For any node v ∈ V, according to Eq. 8, we can write (Px)[v] = (1/2)(x[v] + Σ_{u∼v} x[u]/deg(u)). This construction essentially generalizes the graph demonstrated in Figure 1 of the main paper (see Sec. 7). Therefore, computing these responses boils down to a simple proof by cases. Let us now revisit the example given in the main paper (Figure 1). These responses, with appropriate color coding, give the illustration in Figure 1 of the main paper.

As many applied fields, such as bioinformatics and neuroscience, heavily rely on the analysis of graph-structured data, the study of reliable classification methods has received much attention lately.
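The node-wise action of P can be checked numerically. The sketch below assumes the lazy random walk definition P = (1/2)(I + W D^{-1}) used in the geometric scattering literature; the 4-node path graph and the signal x are arbitrary illustrative choices, not the example from Figure 1:

```python
import numpy as np

# Adjacency matrix of a small example graph (a 4-node path).
W = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
deg = W.sum(axis=1)            # node degrees: [1, 2, 2, 1]
D_inv = np.diag(1.0 / deg)

# Lazy random walk matrix P = 1/2 (I + W D^{-1}).
P = 0.5 * (np.eye(4) + W @ D_inv)

x = np.array([1.0, 0.0, 2.0, -1.0])   # a graph signal

# Node-wise formula: (Px)[v] = 1/2 (x[v] + sum over neighbors u of x[u]/deg(u)).
v = 1
neighbors = np.nonzero(W[v])[0]
node_wise = 0.5 * (x[v] + sum(x[u] / deg[u] for u in neighbors))
assert np.isclose((P @ x)[v], node_wise)
```

Note that P defined this way is column-stochastic (each column of W D^{-1} sums to one), which is what makes it a valid lazy random walk operator.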
Review for NeurIPS paper: Scattering GCN: Overcoming Oversmoothness in Graph Convolutional Networks
This paper addresses graph-based semi-supervised learning (more specifically, node classification) and proposes to incorporate a geometric scattering transform into GCNs to overcome the over-smoothing issue of state-of-the-art GCNs. It also proposes graph residual convolutions to better aggregate node information. The clarity, novelty, and significance are clearly above the bar of NeurIPS. The authors also did a good job in their rebuttal of addressing the concerns raised by reviewers. Thus, all of us have agreed to accept this paper for publication!
Review for NeurIPS paper: Scattering GCN: Overcoming Oversmoothness in Graph Convolutional Networks
Additional Feedback: I'd be interested to hear whether the proposed approach could also benefit from an attention mechanism similar to GAT. I wasn't entirely sure about the setup for the experiment where the training size is reduced. Is this taking a fixed graph and then simply hiding an increasing portion of the node labels, or is the graph structure different between the settings with reduced training size? Are the number of nodes for which labels are predicted the same between each setting? Is each unlabelled node always connected to at least one labelled node or does the reduction of training size also mean that the nearest labelled node might be further away in the low training size regime?
Scattering GCN: Overcoming Oversmoothness in Graph Convolutional Networks
Yimeng Min, Frederik Wenkel, Guy Wolf
Graph convolutional networks (GCNs) have shown promising results in processing graph data by extracting structure-aware features. This gave rise to extensive work in geometric deep learning, focusing on designing network architectures that ensure neuron activations conform to regularity patterns within the input graph. However, in most cases the graph structure is only accounted for by considering the similarity of activations between adjacent nodes, which in turn degrades the results. In this work, we augment GCN models by incorporating richer notions of regularity by leveraging cascades of band-pass filters, known as geometric scatterings. The produced graph features incorporate multiscale representations of local graph structures, while avoiding overly smooth activations forced by previous architectures. Moreover, inspired by skip connections used in residual networks, we introduce graph residual convolutions that reduce high-frequency noise caused by joining together information at multiple scales. Our hybrid architecture introduces a new model for semi-supervised learning on graph-structured data, and its potential is demonstrated for node classification tasks on multiple graph datasets, where it outperforms leading GCN models.
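The cascades of band-pass filters described above can be sketched numerically. The following is a minimal illustration, not the authors' implementation: it builds diffusion wavelets Psi_k = P^(2^(k-1)) - P^(2^k) from the lazy random walk P = (1/2)(I + W D^{-1}) and applies the absolute value as the nonlinearity, following the geometric scattering literature; the 5-node cycle graph, the signal, and the number of scales are arbitrary choices for the example:

```python
import numpy as np

def lazy_walk(W):
    """Lazy random walk P = 1/2 (I + W D^{-1})."""
    D_inv = np.diag(1.0 / W.sum(axis=1))
    return 0.5 * (np.eye(len(W)) + W @ D_inv)

def scattering_features(W, x, K=3):
    """First-order scattering features |Psi_k x| for k = 1..K,
    with diffusion wavelets Psi_k = P^(2^(k-1)) - P^(2^k)."""
    P = lazy_walk(W)
    feats = []
    for k in range(1, K + 1):
        Psi_k = (np.linalg.matrix_power(P, 2 ** (k - 1))
                 - np.linalg.matrix_power(P, 2 ** k))
        feats.append(np.abs(Psi_k @ x))   # band-pass response + nonlinearity
    return np.stack(feats, axis=1)        # one column per scale

# 5-node cycle graph and a spiky (high-frequency) signal.
W = np.roll(np.eye(5), 1, axis=1) + np.roll(np.eye(5), -1, axis=1)
x = np.array([1.0, 0.0, 0.0, 0.0, 0.0])
S = scattering_features(W, x)
print(S.shape)  # (5, 3): 3 band-pass channels per node
```

Each column of S captures signal variation at a different diffusion scale; on this regular graph a constant signal produces zero response in every band, illustrating how these filters retain information that repeated low-pass averaging (the source of oversmoothing) would discard. A hybrid model in the spirit of the paper would concatenate such channels with low-pass GCN-style features before classification.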